# Low resource consumption
## Qwen3 30B A3B Gptq 8bit
Qwen3 30B A3B is a large language model quantized to 8 bits with the GPTQ method, suitable for memory-efficient inference.
- Tags: Large Language Model · Transformers
- License: Apache-2.0 · Author: btbtyler09 · Downloads: 301 · Likes: 2
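A minimal loading sketch with the Transformers library. The repository ID below is an assumption made for illustration, and loading GPTQ weights through `from_pretrained` requires a GPTQ backend (e.g. the optimum/gptq integration) to be installed; the model card is authoritative.

```python
# Hedged sketch: loading a GPTQ-quantized causal LM with Transformers.
# The repo ID is illustrative; check the model card for the exact name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "btbtyler09/Qwen3-30B-A3B-gptq-8bit"  # assumed repo ID

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",   # spread layers across available devices
    torch_dtype="auto",  # keep the dtype stored in the checkpoint
)

inputs = tokenizer("Explain GPTQ quantization in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```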
## Smollm2 135M Eagle
A lightweight Russian-English bilingual language model fine-tuned from SmolLM2-135M, with improved Russian-language handling but notable limitations.
- Tags: Large Language Model · Multilingual
- License: Apache-2.0 · Author: nyuuzyou · Downloads: 50 · Likes: 3
## Whisper Large V3 Turbo Quantized.w4a16
An INT4 weight-quantized version of openai/whisper-large-v3-turbo for efficient audio-to-text transcription.
- Tags: Speech Recognition · Transformers · English
- License: Apache-2.0 · Author: RedHatAI · Downloads: 1,851 · Likes: 2
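A hedged usage sketch with the Transformers speech-recognition pipeline. The repository ID is assumed from the listing, and loading the INT4 (compressed-tensors) weights through this path depends on having a recent enough Transformers release.

```python
# Hedged sketch: automatic speech recognition with the Transformers pipeline.
# Repo ID is assumed; INT4 weight loading may require a recent Transformers
# version with compressed-tensors support.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="RedHatAI/whisper-large-v3-turbo-quantized.w4a16",  # assumed repo ID
    device_map="auto",
)

result = asr("sample.wav", return_timestamps=True)
print(result["text"])
```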
## Gemma 3 12b It Q8 0 GGUF
This model is converted from google/gemma-3-12b-it to GGUF format and is suitable for the llama.cpp framework.
- Tags: Large Language Model
- Author: NikolayKozloff · Downloads: 89 · Likes: 1
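Since the entry targets llama.cpp, a minimal sketch with the llama-cpp-python bindings; the local GGUF file name is illustrative and should match whatever file the repository actually ships.

```python
# Hedged sketch: running a Q8_0 GGUF checkpoint through llama-cpp-python.
# The model_path is illustrative; download the .gguf file from the repo first.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-12b-it-q8_0.gguf",  # assumed local file name
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU when one is available
)

out = llm("Summarize what the GGUF format is used for.", max_tokens=128)
print(out["choices"][0]["text"])
```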
## Shuttle 3 Diffusion Fp8
An efficient text-to-image model that generates high-quality images in just 4 steps and ships in multiple hardware-optimized formats.
- Tags: Image Generation · English
- License: Apache-2.0 · Author: shuttleai · Downloads: 1,119 · Likes: 26
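A hedged text-to-image sketch with Diffusers. The 4-step setting follows the description above; the repository ID, dtype, and guidance value are assumptions, and the FP8 variant may need extra handling beyond this plain loading path.

```python
# Hedged sketch: few-step text-to-image generation with Diffusers.
# Repo ID and dtype are assumptions; the FP8 variant may need special loading.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "shuttleai/shuttle-3-diffusion",  # assumed repo ID
    torch_dtype=torch.bfloat16,
).to("cuda")

image = pipe(
    "a watercolor lighthouse at dawn",
    num_inference_steps=4,   # the model is advertised as a 4-step generator
    guidance_scale=3.5,      # assumed value
).images[0]
image.save("lighthouse.png")
```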
## VINE R Enc
An image watermarking model based on SDXL-Turbo, focused on image-to-image transformation tasks.
- Tags: Image Generation · Safetensors · English
- License: MIT · Author: Shilin-LU · Downloads: 730 · Likes: 0
## Phi 3 Mini 4k Instruct Graph
Phi-3-mini-4k-instruct-graph is a fine-tuned version of Microsoft's Phi-3-mini-4k-instruct, designed to extract entity relationships from general text and aiming to match GPT-4 in the quality and accuracy of the generated entity-relationship graphs.
- Tags: Knowledge Graph · Transformers · English
- Author: EmergentMethods · Downloads: 524 · Likes: 44
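A hedged sketch of prompting the model for an entity-relationship graph through the Transformers text-generation (chat) pipeline. The repository ID and the prompt wording are assumptions; the model card defines the exact input format and output schema the fine-tune was trained on.

```python
# Hedged sketch: asking a fine-tuned instruct model for an entity graph.
# Repo ID and prompt format are assumptions; consult the model card for the
# schema the fine-tune actually expects.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="EmergentMethods/Phi-3-mini-4k-instruct-graph",  # assumed repo ID
    device_map="auto",
)

text = "Acme Corp acquired Widget Labs in 2021, and Widget Labs CEO Jane Doe joined Acme's board."
messages = [
    {"role": "user",
     "content": f"Extract the entities and relationships in this text as JSON:\n{text}"},
]
out = generator(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```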
## Whisper Large V3 Japanese 4k Steps Ct2
A CTranslate2 conversion of OpenAI's Whisper large-v3, fine-tuned for Japanese with an additional 4,000 training steps while retaining multilingual speech recognition.
- Tags: Speech Recognition · Multilingual
- License: MIT · Author: JhonVanced · Downloads: 54 · Likes: 4
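Because the checkpoint is in CTranslate2 format, a natural way to run it is through faster-whisper; the repository ID below is assumed from the listing.

```python
# Hedged sketch: Japanese transcription with faster-whisper (CTranslate2 backend).
# The repo ID is assumed; faster-whisper also accepts a local directory path.
from faster_whisper import WhisperModel

model = WhisperModel(
    "JhonVanced/whisper-large-v3-japanese-4k-steps-ct2",  # assumed repo ID
    device="cuda",
    compute_type="float16",
)

segments, info = model.transcribe("interview.wav", language="ja")
for segment in segments:
    print(f"[{segment.start:.1f}s -> {segment.end:.1f}s] {segment.text}")
```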
## Gerbil A 32m
Gerbil-A-32m is a class-A model with 32 million parameters, trained on 640 million tokens and suitable for a range of natural language processing tasks.
- Tags: Large Language Model · Transformers
- License: Apache-2.0 · Author: GerbilLab · Downloads: 33 · Likes: 2
## Ai Illustration
An AI illustration generation model trained with DreamBooth, suitable for text-to-image tasks.
- Tags: Image Generation
- License: OpenRAIL · Author: sd-dreambooth-library · Downloads: 56 · Likes: 8
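DreamBooth checkpoints in this library are typically plain Stable Diffusion weights, so a hedged Diffusers sketch follows; the repository ID and any trigger phrase are assumptions, not confirmed by the listing.

```python
# Hedged sketch: text-to-image with a DreamBooth-tuned Stable Diffusion model.
# Repo ID and trigger phrase are assumptions; check the model card.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/ai-illustration",  # assumed repo ID
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("an AI-illustration style portrait of a fox in a forest").images[0]
image.save("fox.png")
```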
## Vi Arcane V1
Vi-(Arcane)-v1 is a DreamBooth model trained by asukii using TheLastBen's fast DreamBooth notebook, suitable for text-to-image generation tasks.
- Tags: Image Generation
- License: OpenRAIL · Author: asukii · Downloads: 30 · Likes: 0
## Autonlp Imdb Test 21134442
A binary classification model trained with AutoNLP for IMDB review sentiment analysis.
- Tags: Text Classification · Transformers · English
- Author: mmcquade11 · Downloads: 16 · Likes: 0
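A hedged sketch of running the classifier with the Transformers pipeline; the repository ID is assumed from the author and model name shown above, and the label names depend on the AutoNLP training configuration.

```python
# Hedged sketch: binary sentiment classification with a Transformers pipeline.
# Repo ID is assumed from the listing (author/model-name).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mmcquade11/autonlp-imdb-test-21134442",  # assumed repo ID
)

print(classifier("A slow start, but the last act completely won me over."))
# -> [{'label': ..., 'score': ...}]  label names depend on the AutoNLP config
```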